mSCoRe: A Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning
Ngo, Nghia Trung, Dernoncourt, Franck, Nguyen, Thien Huu
Recent advancements in reasoning-reinforced Large Language Models (LLMs) have shown remarkable capabilities in complex reasoning tasks. However, the mechanisms underlying their use of different human reasoning skills remain poorly investigated, especially for multilingual commonsense reasoning that involves everyday knowledge across different languages and cultures. To address this gap, we propose a Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning (mSCoRe). Our benchmark incorporates three key components designed to systematically evaluate LLMs' reasoning capabilities: (1) a novel taxonomy of reasoning skills that enables fine-grained analysis of models' reasoning processes, (2) a robust data synthesis pipeline tailored specifically for commonsense reasoning evaluation, and (3) a complexity scaling framework that allows task difficulty to scale dynamically alongside future improvements in LLM abilities. Extensive experiments on eight state-of-the-art LLMs of varying sizes and training approaches demonstrate that mSCoRe remains significantly challenging for current models, particularly at higher complexity levels. Our results reveal the limitations of such reasoning-reinforced models when confronted with nuanced multilingual general and cultural commonsense. We further provide detailed analysis of the models' reasoning processes, suggesting future directions for improving multilingual commonsense reasoning capabilities.
- Europe > Austria > Vienna (0.14)
- North America > United States > Oregon > Lane County > Eugene (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- (12 more...)
- Workflow (1.00)
- Research Report > New Finding (0.66)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (1.00)
A Real-Time Gesture-Based Control Framework
Khazaei, Mahya, Bahrani, Ali, Tzanetakis, George
We introduce a real-time, human-in-the-loop gesture control framework that can dynamically adapt audio and music based on human movement by analyzing live video input. By creating a responsive connection between visual and auditory stimuli, this system enables dancers and performers to not only respond to music but also influence it through their movements. Designed for live performances, interactive installations, and personal use, it offers an immersive experience where users can shape the music in real time. The framework integrates computer vision and machine learning techniques to track and interpret motion, allowing users to manipulate audio elements such as tempo, pitch, effects, and playback sequence. With ongoing training, it achieves user-independent functionality, requiring as few as 50 to 80 samples to label simple gestures. This framework combines gesture training, cue mapping, and audio manipulation to create a dynamic, interactive experience. Gestures are interpreted as input signals, mapped to sound control commands, and used to naturally adjust music elements, showcasing the seamless interplay between human interaction and machine response.
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Information Technology > Architecture (1.00)
- Information Technology > Artificial Intelligence > Vision > Gesture Recognition (0.93)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
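The gesture-to-sound loop the abstract describes (train on a few labelled samples, recognise a gesture, map it to an audio command) can be sketched minimally as follows. This is an illustrative assumption, not the framework's actual code: gestures are flattened keypoint vectors, a 1-nearest-neighbour classifier stands in for whatever model is trained, and the cue names and commands are hypothetical.

```python
import math

def nearest_gesture(sample, labelled):
    """Return the label of the labelled training sample closest to `sample`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labelled, key=lambda item: dist(sample, item[0]))[1]

# Hypothetical cue map: recognised gesture -> audio-control command.
CUE_MAP = {
    "raise_arm": ("tempo", +5),        # speed up by 5 BPM
    "lower_arm": ("tempo", -5),        # slow down by 5 BPM
    "spin":      ("effect", "reverb"), # toggle an effect
}

def handle_frame(keypoints, labelled):
    """Classify one video frame's keypoints and return the mapped command."""
    gesture = nearest_gesture(keypoints, labelled)
    return CUE_MAP.get(gesture)

# Toy training set: 2-D "keypoints" for two gestures, in the spirit of
# the 50-80 samples per gesture mentioned in the abstract.
training = [([0.0, 1.0], "raise_arm"), ([0.0, -1.0], "lower_arm")]
```

With real input, `keypoints` would come from a pose estimator on the live video feed, and the returned command would be sent to the audio engine.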
Musical Agent Systems: MACAT and MACataRT
Lee, Keon Ju M., Pasquier, Philippe
Our research explores the development and application of musical agents, human-in-the-loop generative AI systems designed to support music performance and improvisation within co-creative spaces. We introduce MACAT and MACataRT, two distinct musical agent systems crafted to enhance interactive music-making between human musicians and AI. MACAT is optimized for agent-led performance, employing real-time synthesis and self-listening to shape its output autonomously, while MACataRT provides a flexible environment for collaborative improvisation through audio mosaicing and sequence-based learning. Both systems emphasize training on personalized, small datasets, fostering ethical and transparent AI engagement that respects artistic integrity. This research highlights how interactive, artist-centred generative AI can expand creative possibilities, empowering musicians to explore new forms of artistic expression in real-time, performance-driven, and improvisational music contexts.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.14)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
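Audio mosaicing, which the abstract names as MACataRT's core technique, is commonly implemented as nearest-neighbour frame matching: for each feature frame of the incoming audio, pick the corpus frame with the closest features, then splice that frame's audio into the output. The sketch below shows that general idea under those assumptions; it is not MACataRT's actual implementation, and `target_frames`/`corpus_frames` are illustrative names for per-frame feature vectors.

```python
def mosaic(target_frames, corpus_frames):
    """For each target feature frame, return the index of the best-matching
    corpus frame (squared Euclidean distance in feature space)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    picks = []
    for t in target_frames:
        best = min(range(len(corpus_frames)),
                   key=lambda i: dist(t, corpus_frames[i]))
        picks.append(best)
    return picks
```

A real system would extract features such as MFCCs or spectral centroid per frame, and often adds a continuity penalty so consecutive picks favour adjacent corpus frames.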
Biles
The author has been performing with GenJam, the Genetic Jammer, for nearly 20 years and has accumulated a wealth of experiences in performing live jazz with technology. This paper presents a discussion of the use of technology in jazz, both from the performer's and from the audience's perspective, and it proposes a classification scheme for live performance that is geared to mainstream performing situations.
Why Improvisation Is the Future in an AI-Dominated World
In his autobiography, Miles Davis complained that classical musicians were like robots. He spoke from experience – he'd studied classical music at Juilliard and recorded with classical musicians even after becoming a world-renowned jazz artist. As a music professor at the University of Florida, which is transforming itself into an "AI university," I often think about Davis' words, and the ways in which musicians have become more machinelike over the past century. At the same time, I see how machines have been getting better at mimicking human improvisation, in all aspects of life. I wonder what the limits of machine improvisation will be, and which human activities will survive the rise of intelligent machines.
- North America > United States > New York > New York County > New York City (0.05)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.05)
- Media > Music (1.00)
- Leisure & Entertainment > Games > Chess (0.71)
Fortnite: 'Millions attend' virtual Marshmello concert
While the Maroon 5 singer was courting controversy for his Super Bowl half-time show on Sunday, video game fans were revelling in the afterglow of a different live performance. "Millions attended" a concert on Saturday starring masked DJ Marshmello, who played a set including 2018 hits Everyday and Happier. The concert was entirely virtual, briefly turning Fortnite from a third-person shooter into a music venue. "The first ever live virtual concert inside of @fortnite with millions of people in attendance. So insane, thank you epic games and everyone who made this possible!"
- Information Technology > Communications > Social Media (0.58)
- Information Technology > Artificial Intelligence > Games (0.44)
Artificial Intelligence in the Concertgebouw
Arzt, Andreas (Johannes Kepler University Linz) | Frostel, Harald (Johannes Kepler University Linz) | Gadermaier, Thassilo (Austrian Research Institute for Artificial Intelligence) | Gasser, Martin (Austrian Research Institute for Artificial Intelligence) | Grachten, Maarten (Austrian Research Institute for Artificial Intelligence) | Widmer, Gerhard (Johannes Kepler University Linz)
In this paper we present a real-world application (the first of its kind) of machine listening in the context of a live concert in a world-famous concert hall - the Concertgebouw in Amsterdam. A real-time music tracking algorithm listens to the Royal Concertgebouw Orchestra performing Richard Strauss' Alpensinfonie and follows the progress in the sheet music, i.e., continuously tracks the most likely position of the live music in the printed score. This information, in turn, is used to enrich the concert experience for members of the audience by streaming synchronised visual content (the sheet music, explanatory text and videos) onto tablet computers in the concert hall. The main focus of this paper is on the challenges involved in tracking live orchestral music, i.e., how to deal with heavily polyphonic music, how to prepare the data needed, and how to achieve the necessary robustness and precision.
- Europe > Netherlands > North Holland > Amsterdam (0.24)
- Europe > Austria > Vienna (0.14)
- Asia > Taiwan > Taiwan Province > Taipei (0.05)
- (11 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
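The score following the Concertgebouw paper describes (continuously tracking the most likely position of the live audio in the printed score) is typically done with an online DTW-style recurrence: keep a cost vector over score positions and update it with each incoming audio frame. The toy sketch below illustrates that recurrence under strong simplifying assumptions (one-dimensional features, absolute-difference cost, no tempo model); it is a stand-in for the idea, not the system's actual algorithm.

```python
def follow(score_feats, audio_frames):
    """Yield the estimated score position after each incoming audio frame.

    score_feats  -- one feature value per score event (toy 1-D features)
    audio_frames -- stream of feature values extracted from live audio
    """
    INF = float("inf")
    n = len(score_feats)
    cost = [0.0] + [INF] * (n - 1)   # performance starts at the first event
    for frame in audio_frames:
        new = [INF] * n
        for j in range(n):
            local = abs(frame - score_feats[j])   # frame-to-event distance
            # stay at event j, advance from j-1, or skip one event (j-2)
            prev = min(cost[j],
                       cost[j - 1] if j >= 1 else INF,
                       cost[j - 2] if j >= 2 else INF)
            new[j] = local + prev
        cost = new
        yield min(range(n), key=cost.__getitem__)  # most likely position
```

In a real deployment the features would be chroma or spectral vectors, and the reported position would drive the synchronised sheet music and video streamed to the audience's tablets.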